Results 1 - 6 of 6
1.
Opt Express ; 30(6): 8571-8591, 2022 Mar 14.
Article in English | MEDLINE | ID: mdl-35299308

ABSTRACT

Acquiring the 3D geometry of objects has been an active research topic, wherein the reconstruction of transparent objects poses a great challenge. In this paper, we present a fully automatic approach for reconstructing the exterior surface of a complex transparent scene. By scanning a line laser with a galvo mirror, images of the scene are captured from two viewing directions. Because light is transmitted inside the transparent object, direct triangulation of the captured feature points against the calibrated laser plane produces a large number of 3D point candidates, many of which are incorrect. Various cases of laser transmission inside the transparent object are analyzed, and the reconstructed 3D laser point candidates are classified into two types: first-reflection points and non-first-reflection points, where the first-reflection points are the laser points first reflected on the front surface of the measured object. A novel four-layer refinement process is then proposed to extract the first-reflection points step by step from the 3D point candidates through optical geometric constraints: (1) Layer 1: fake points removed by a single camera; (2) Layer 2: ambiguous points removed by a dual-camera joint constraint; (3) Layer 3: missing first-reflection exterior surface points retrieved by fusion; and (4) Layer 4: severely ambiguous points removed by contour continuity. In addition, a novel calibration model of this imaging system is proposed for reconstructing the 3D point candidates through triangulation. Compared with traditional laser scanning, the method incorporates the viewing-angle information of a second camera and applies the four-layer refinement process to the reconstruction of transparent objects. Experiments on real objects demonstrate that the proposed method can successfully extract the first-reflection points from the candidates and recover the complex shapes of transparent and semitransparent objects.
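For illustration, the triangulation that generates the 3D point candidates amounts to intersecting a camera viewing ray with the calibrated laser plane. The helper below is a hypothetical sketch of that geometric step only, not the paper's calibration model or refinement layers:

```python
import numpy as np

def triangulate_laser_point(ray_origin, ray_dir, plane_normal, plane_d):
    """Intersect a camera viewing ray with a calibrated laser plane.

    The plane satisfies n . x + d = 0; the ray is o + t * v with t >= 0.
    Returns the 3D intersection point, or None when the ray is parallel
    to the plane or the intersection lies behind the camera.
    """
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the laser plane
    t = -(np.dot(plane_normal, ray_origin) + plane_d) / denom
    if t < 0:
        return None  # intersection behind the camera
    return ray_origin + t * ray_dir

# Example: camera at the origin, laser plane x = 1 (n = (1,0,0), d = -1).
p = triangulate_laser_point(np.array([0.0, 0.0, 0.0]),
                            np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0),
                            np.array([1.0, 0.0, 0.0]), -1.0)
# p is the candidate 3D point (1, 0, 1)
```

Every feature point on a detected laser stripe yields one such candidate; the paper's four-layer refinement then decides which candidates are genuine first-reflection points.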

2.
IEEE Trans Image Process ; 31: 2106-2121, 2022.
Article in English | MEDLINE | ID: mdl-35167454

ABSTRACT

Three-dimensional (3D) reconstruction of dynamic objects has broad applications, including object recognition and robotic manipulation. However, achieving high reconstruction accuracy and robustness to motion simultaneously is challenging. In this paper, we present a novel method for 3D reconstruction of dynamic objects with two main features. First, a structured-light multiplexing method is developed that requires only three patterns to achieve high-accuracy encoding; fewer projected patterns shorten the image acquisition time, reducing object motion within each reconstruction cycle. The three spatial-temporally encoded patterns are generated by embedding a specifically designed spatial-coded texture map into temporally encoded three-step phase-shifting fringes. A temporal codeword and three spatial codewords are extracted from the composite patterns by a proposed extraction algorithm, and the two types of codewords are used separately in stereo matching: the temporal codeword ensures high accuracy, while the spatial codewords remove phase ambiguity. Second, we aim to eliminate the reconstruction error induced by motion between frames, abbreviated as motion-induced error (MiE). Instead of assuming the object to be static while the three images are acquired, we derive the motion of projection pixels across frames: using the extracted spatial codewords, correspondences between frames are found, i.e., pixels with the same codewords are traceable through the image sequence. We can therefore obtain the phase map at each image-acquisition moment unaffected by object motion, and the object surface corresponding to every image can be recovered. Experimental results validate the high reconstruction accuracy and precision of the proposed method for dynamic objects at different motion speeds. Comparative experiments show that the method performs well under various types of motion, including translation in different directions and deformation.
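The temporally encoded component mentioned above builds on standard three-step phase shifting. A minimal sketch of the wrapped-phase recovery, assuming phase shifts of -120°, 0°, and +120° (a common convention, not necessarily the authors' exact pattern design), is:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with shifts -120°, 0°, +120°.

    Each image follows I_k = A + B*cos(phi + delta_k); the wrapped phase
    phi in (-pi, pi] is recovered per pixel via the standard identity
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3).
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: generate fringes for a known phase map and recover it.
phi = np.linspace(-3.0, 3.0, 7)          # known wrapped phases, within (-pi, pi]
a, b = 0.5, 0.4                          # background intensity and modulation
i1 = a + b * np.cos(phi - 2.0 * np.pi / 3.0)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2.0 * np.pi / 3.0)
recovered = three_step_phase(i1, i2, i3)
```

The recovered phase is wrapped; in the paper, the embedded spatial codewords take the role of resolving the resulting phase ambiguity.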

3.
Article in English | MEDLINE | ID: mdl-37015702

ABSTRACT

Sleep monitoring typically requires the uncomfortable and expensive polysomnography (PSG) test to determine sleep stages. Body movement and cardiopulmonary signals provide an alternative way to perform sleep staging. In recent years, long short-term memory (LSTM) networks and convolutional neural networks (CNNs) have dominated automatic sleep staging owing to their better learning ability than classical machine learning classifiers. However, LSTMs may lose information when dealing with long sequences, while CNNs are not well suited to sequence modeling. As an improvement, we develop a hierarchical attention-based deep learning method for sleep staging using body movement, electrocardiogram (ECG), and abdominal breathing signals. We apply multi-head self-attention to model the global context of feature sequences and couple it with a CNN to achieve a hierarchical self-attention weight assignment. We evaluate the method on two public datasets. It outperforms the baselines on three-stage sleep classification, achieving an accuracy of 84.3%, an F1 score of 0.8038, and a Cohen's kappa coefficient of 0.7036. The results demonstrate the effectiveness of the hierarchical self-attention mechanism for processing feature sequences in the sleep stage classification problem. This paper opens new possibilities for long-term sleep monitoring using movement and cardiopulmonary signals obtained from non-invasive devices. The code is available at: https://github.com/scutrd/attention-sleep-staging.
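The global-context step can be illustrated with a minimal multi-head self-attention over a feature sequence. The shapes, head count, and random projection matrices below are illustrative assumptions, not the paper's network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, n_heads):
    """Minimal multi-head self-attention over a feature sequence.

    x: (seq_len, d_model); wq/wk/wv: (d_model, d_model) projections.
    Every head attends over the whole sequence, which is what lets the
    model capture global context across sleep epochs.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    # Split into heads: (n_heads, seq_len, d_head).
    split = lambda m: m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ v                            # (heads, seq, d_head)
    # Concatenate heads back to (seq_len, d_model).
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))                 # 10 epochs, 8 features each
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
y = multi_head_self_attention(x, wq, wk, wv, n_heads=2)
```

In the paper this attention output is further combined with CNN features to form the hierarchical weight assignment.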

4.
Appl Opt ; 59(29): 9259-9271, 2020 Oct 10.
Article in English | MEDLINE | ID: mdl-33104641

ABSTRACT

Three-dimensional (3D) vision plays an important role in industrial vision, where occlusion and reflection make it challenging to reconstruct an entire application scene. In this paper, we present a novel 3D reconstruction framework to solve occlusion and reflection issues in complex scenes. A dual monocular structured-light system is adopted to obtain point clouds from different viewing angles and fill in the missing points. To enhance the efficiency of point cloud fusion, we create a decision map that avoids reconstructing the regions repeated between the left and right systems. Additionally, a compensation method based on the decision map is proposed to reduce the reconstruction error of the dual monocular system in the fusion area. Gray-code and phase-shifting patterns are used to encode the complex scenes, while the phase-jumping problem at phase boundaries is avoided by a specially designed compensation function. Experiments including accuracy evaluation, comparison with a traditional fusion algorithm, and the reconstruction of real complex scenes validate the method's accuracy and its robustness to shiny surfaces and occlusion.
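The decision-map idea can be sketched with two depth maps and their validity masks: each pixel is assigned to exactly one sub-system, so overlapping regions are not reconstructed twice. The fusion rule below (simple averaging in the overlap) is a simplifying assumption, not the paper's compensation method:

```python
import numpy as np

def fuse_with_decision_map(depth_left, depth_right, valid_left, valid_right):
    """Fuse two depth maps using a per-pixel decision map.

    Pixels seen only by the left system take the left depth, pixels
    seen only by the right system take the right depth, and pixels
    visible to both are (in this sketch) averaged, so each region of
    the scene contributes once to the fused result.
    """
    fused = np.full_like(depth_left, np.nan)
    both = valid_left & valid_right
    only_l = valid_left & ~valid_right
    only_r = valid_right & ~valid_left
    fused[only_l] = depth_left[only_l]
    fused[only_r] = depth_right[only_r]
    fused[both] = 0.5 * (depth_left[both] + depth_right[both])
    return fused

# Tiny example: NaN marks pixels a system failed to reconstruct
# (e.g. due to occlusion or a shiny surface).
dl = np.array([[1.0, 2.0], [np.nan, 4.0]])
dr = np.array([[1.2, np.nan], [3.0, 4.2]])
f = fuse_with_decision_map(dl, dr, ~np.isnan(dl), ~np.isnan(dr))
```

Holes in one view are filled from the other, which is exactly the benefit the dual-view setup is aiming for.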

5.
Appl Opt ; 56(27): 7741-7748, 2017 Sep 20.
Article in English | MEDLINE | ID: mdl-29047756

ABSTRACT

Depth sensing is a basic problem in three-dimensional computer vision, and structured light is one of the most prevalent methods for it. However, complex surroundings and strong ambient illumination are fairly unfavorable to depth sensing based on structured light: complex surroundings increase computation overhead and require extra effort to separate the target object, while strong ambient illumination degrades the signal-to-noise ratio of the structured light and thus increases the difficulty of decoding. In this paper, we show that polarization-coded structured light is capable of target-enhanced depth sensing under ambient illumination. We present the polarimetric principle, an improved algorithm for polarization-coded structured light, and a signal-to-noise-ratio analysis under ambient illumination. Experimental results show that polarization-coded structured light is efficient and robust for target depth sensing in complicated environments, and it is promising for target depth sensing in outdoor scenarios and industrial inspection.
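One common polarimetric tool for separating polarized structured light from largely unpolarized ambient light is the degree of linear polarization (DoLP), computed from intensities behind four analyzer angles. This is a generic sketch of that principle, not necessarily the paper's coding scheme:

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from four analyzer-angle intensities.

    Stokes parameters: S0 = (I0+I45+I90+I135)/2, S1 = I0-I90, S2 = I45-I135.
    DoLP = sqrt(S1^2 + S2^2) / S0 is near 1 for fully polarized light
    (the projected structured light) and near 0 for unpolarized light
    (most ambient illumination), which is what enables target enhancement.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

# Fully polarized light at 0° (Malus's law: I(theta) = cos^2(theta)).
dolp_signal = degree_of_linear_polarization(1.0, 0.5, 0.0, 0.5)
# Unpolarized ambient light: equal intensity at every analyzer angle.
dolp_ambient = degree_of_linear_polarization(0.5, 0.5, 0.5, 0.5)
```

Thresholding such a DoLP image suppresses the ambient background before the structured-light pattern is decoded.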

6.
Phys Rev E Stat Nonlin Soft Matter Phys ; 75(3 Pt 2): 036710, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17500829

ABSTRACT

Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD), which integrates the multigrid idea with contact detection. Both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiency is strongly influenced by the object size distribution, the performance of the MGCD is insensitive to it. We compare the MGCD with the no-binary-search (NBS) method and the multilevel boxing method in three dimensions, in terms of both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method in memory consumption. For objects of diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem in a granular simulation system based on the discrete element method. From this simulation, we obtain the packing density of monosize packing and of binary packing with a size ratio of 10. The packing density for monosize particles is 0.636; for binary packing with a size ratio of 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
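The flavor of grid-based O(N) contact detection can be sketched with single-level spatial binning, essentially the NBS-style idea for equal-sized spheres; a multigrid variant would additionally assign objects to grid levels by size, which is not shown here:

```python
from collections import defaultdict

def grid_contact_pairs(centers, radius, cell=None):
    """Broad-phase contact detection for equal-radius spheres by binning.

    Each sphere is hashed into a uniform grid cell of side ~2*radius;
    only spheres in the same or neighbouring cells are distance-tested,
    avoiding the O(N^2) all-pairs check. With similar-sized objects the
    expected cost is O(N).
    """
    cell = cell or 2.0 * radius
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    contacts = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                        for i in members:
                            if i < j:  # each pair tested once
                                pi, pj = centers[i], centers[j]
                                d2 = sum((a - b) ** 2 for a, b in zip(pi, pj))
                                if d2 <= (2.0 * radius) ** 2:
                                    contacts.add((i, j))
    return contacts

# Three unit-radius spheres: 0 and 1 touch, 2 is far away.
pairs = grid_contact_pairs([(0, 0, 0), (1.5, 0, 0), (10, 10, 10)], radius=1.0)
```

A single grid degrades when object sizes vary widely (large objects span many cells, or cells hold many small objects), which is the weakness the multigrid construction in the paper is designed to remove.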
